fix: duplicate record test #47
Conversation
@pnadolny13 We currently only do a merge-upsert when
Just had a quick scan of the PPW variant and I can't see any indication that they are sorting or explicitly handling unsorted streams. I also don't see any temp-table sorting in
@kgpayne thanks for the feedback! Is there a case where the
Oh interesting, yeah, I guess if the stream was unsorted the merge-upsert logic, which is the same in the PPW variant, would have issues too. Maybe that's a safe assumption given how long those PPW variants have been reliably running in production 🤔. Especially since the current default target doesn't support this, I'm almost leaning towards deferring this until later if there are no clean assumptions we can make. I would rather see the edge case of duplicate records in Snowflake than risk accidentally choosing the wrong data as latest.
@kgpayne I figured out where this is handled in the PPW variant. They iterate over the incoming records and collect them in a dictionary for batching. The dict key is based on the PK if one is supplied; otherwise it is based on the incrementing row counter they keep for metrics. So if two records with the same PK arrive and key properties are set on the stream, the last to arrive wins, overwriting the first record. I think it's safe to replicate this behavior in this target for now while we explore the sorting edge cases.
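For illustration, here's a minimal sketch of the batching behavior described above (not the actual PPW code); the record shape and key name are hypothetical:

```python
# Sketch of dict-based batching where a later record with the same PK
# overwrites an earlier one. Records and key names are made up.
records = [
    {"id": 1, "name": "first"},
    {"id": 2, "name": "second"},
    {"id": 1, "name": "first-updated"},  # same PK arrives again later
]

key_properties = ["id"]
batch: dict = {}

for counter, record in enumerate(records):
    if key_properties:
        # key off the primary key, so a duplicate PK overwrites the earlier record
        key = tuple(record[k] for k in key_properties)
    else:
        # no PK on the stream: fall back to the row counter, so nothing collides
        key = counter
    batch[key] = record  # last record with a given key wins

assert list(batch.values()) == [
    {"id": 1, "name": "first-updated"},
    {"id": 2, "name": "second"},
]
```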
Closes #41
The challenge is that we're using a merge statement which successfully deduplicates against what already exists in the target table, but the batch of records in the stage also contains dupes. The test was failing because no data existed in the destination table, so we weren't updating any records, only inserting; within our staging file we had multiple records for primary keys 1 and 2, so they all got inserted and the result was duplicates in the destination table.
The way I fixed it in this PR is by adding a QUALIFY row_num = 1 clause to deduplicate within the staging-file select query. It uses the SEQ8 function, which I've never used before, to order the records based on their place in the file, i.e. rows nearer the bottom of the file take precedence over earlier ones. It looks to work as expected, but it feels a little sketchy; I wonder if unsorted streams would have issues where the wrong record gets selected. Ideally the user would give us a sort-by column so we know how to pick the latest record.
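For context, here's a rough sketch of what a merge whose staged-file select is deduplicated this way could look like; it is illustrative only, not the actual generated statement, and the table, stage, and column names are made up:

```python
# Hypothetical sketch of a merge statement whose staged-file select keeps only
# the last occurrence of each primary key. SEQ8() reflects the order rows are
# read from the file, so ORDER BY SEQ8() DESC means the row nearest the bottom
# of the file wins. Table, stage, and column names are illustrative.

def build_merge_sql(table: str, stage_path: str, key: str) -> str:
    return f"""
        MERGE INTO {table} AS t
        USING (
            SELECT
                $1:{key}::INTEGER AS {key},
                $1:name::VARCHAR  AS name
            FROM @{stage_path}
            QUALIFY ROW_NUMBER() OVER (
                PARTITION BY {key}
                ORDER BY SEQ8() DESC
            ) = 1
        ) AS s
        ON t.{key} = s.{key}
        WHEN MATCHED THEN UPDATE SET t.name = s.name
        WHEN NOT MATCHED THEN INSERT ({key}, name) VALUES (s.{key}, s.name)
    """


print(build_merge_sql("my_db.my_schema.users", "my_stage/batch.jsonl.gz", "id"))
```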